Surround surveillance apparatus for mobile body
Patent Abstract:
PURPOSE: A surround surveillance apparatus for a mobile body is provided to monitor the surroundings of the mobile body. CONSTITUTION: A surround surveillance apparatus mounted on a mobile body includes an omni-azimuth visual system (200). The omni-azimuth visual system (200) includes: at least one visual sensor (210) comprising an optical system (212) capable of obtaining an image of an omni-azimuth view field area therearound and of central projection transformation of that image into an optical image, and an imaging section (214) including an imaging lens (216) for converting the optical image obtained by the optical system (212) into image data; an image processor (220) for transforming the image data into at least one of panoramic image data and perspective image data; a display section (240) for displaying one of a panoramic image corresponding to the panoramic image data and a perspective image corresponding to the perspective image data; and a display control section (250) for controlling the display section (240). Publication number: KR20020028853A. Application number: KR1020010062637. Filing date: 2001-10-11. Publication date: 2002-04-17. Inventors: Kiyoshi Kumata; Noritoshi Kako. Applicant: Katsuhiko Machida; Sharp Kabushiki Kaisha. IPC main class:
Patent Description:
Peripheral Surveillance Device for Moving Objects {SURROUND SURVEILLANCE APPARATUS FOR MOBILE BODY} [16] The present invention relates to a surroundings monitoring apparatus for examining the periphery of a moving body. More specifically, the present invention relates to a surroundings monitoring apparatus for examining the surroundings of a moving body, such as an automobile or a train, used for transporting people or cargo. [17] Recently, the increase in traffic accidents has become a major social problem. In particular, many accidents occur in places where a plurality of roads intersect (T-intersections, crossroads, etc.): for example, a person rushes into the road on which a car is running and the car hits the person, or collides with another car. In general, such accidents occur because the field of view of both the driver and the pedestrian is limited in the intersection area, and many drivers and pedestrians do not pay sufficient attention to their surroundings and thus cannot recognize danger quickly. Therefore, there is a great demand for improvement of the vehicle itself, greater attention by drivers, and improvement and maintenance of the traffic environment. [18] Conventionally, for the purpose of improving the traffic environment, mirrors are installed at appropriate locations in intersection areas so that drivers and pedestrians can see blind spots behind obstacles. However, the area of the blind spot that can be covered by a mirror is limited, and a sufficient number of mirrors cannot always be installed. [19] In recent years, many large vehicles, such as buses, and some passenger cars have a monitoring system for checking the safety of the surroundings, especially at the rear of the vehicle. The system includes a surveillance camera installed at the rear of the vehicle and a monitor provided near the driver's seat or in the instrument cluster. The monitor is connected to the surveillance camera via a cable.
The image captured by the surveillance camera is displayed on the monitor. However, even with such a surveillance system, the driver must rely on his own vision to confirm safety on both sides of the vehicle. Therefore, at intersections where blind spots exist due to obstacles, the driver often cannot recognize danger quickly. In addition, a single camera of this type has a limited field of view and can detect obstacles and predict collisions in only one direction. In order to check for the presence of obstacles and to predict the risk of collision over a wide range, certain manipulations, such as changing the camera angle, are required. [20] Since the main purpose of a conventional surroundings monitoring system for automobiles is one-way monitoring, a plurality of cameras are required to monitor the 360° area around the vehicle; that is, it is necessary to provide four or more cameras so that at least one camera covers each of the front, rear, left, and right sides of the vehicle. [21] In addition, when the vehicle is used at a place or time when the ambient temperature falls to or below a certain temperature, such as in a high-altitude region, a high-latitude region, or in winter, dew forms on the window glass of the vehicle for a certain time after the vehicle is started, and may then freeze. Such dew or frost, or other factors, blur the window panes and make it difficult for the driver to see outward from the inside of the car. For example, when a driver parks a car at the edge of a road, in many cases the driver's car is in close proximity to another car or a person. In this situation, when the driver starts to drive the vehicle, the driver cannot sufficiently grasp the surrounding situation unless the fog on the windows has been sufficiently removed or evaporated by hot air. [22] Of course, when using a car, the driver faces various situations in which safety around the car must be ensured.
For example, when starting to drive, the driver should check the safety not only of the front but also of the left, right, and rear of the vehicle. Naturally, when the car turns right or left, or when parking in or leaving a garage, the driver must check the safety around the vehicle. However, due to the shape and structure of the vehicle, there are blind spots, that is, areas around the car which the driver cannot see directly, and it is difficult for the driver to check the safety of such blind spots. As a result, the blind spots place a significant burden on the driver. [23] In addition, when using a conventional surroundings monitoring system, it is necessary to provide a plurality of cameras to confirm the safety of the 360° area around the vehicle. In this case, the driver must selectively switch between the cameras or change the direction of the selected camera according to the situation in order to confirm safety around the vehicle. This operation places a considerable burden on the driver. [24] In addition, if the windshield of the car is fogged and it is difficult for the driver to see the outside from the inside, the driver must raise the temperature inside the car and wait until the fog on the windows is removed before he can check the safety of the surroundings visually. In this case, the driver could start the car in a more reliable manner if there were a means to assist the driver in checking the safety around the vehicle instead of doing so visually. [25] According to one aspect of the present invention, a surroundings monitoring apparatus mounted on a moving body to monitor the periphery of the moving body includes an omnidirectional vision system. The omnidirectional vision system includes: at least one omnidirectional visual sensor comprising an optical system capable of obtaining an image over a surrounding omnidirectional field of view and of converting the image by central projection into an optical image, and an imaging unit having an imaging lens for converting the optical image obtained by the optical system into image data; image processing means for converting the image data into at least one of panoramic image data and perspective image data; a display unit for displaying one of a panoramic image corresponding to the panoramic image data and a perspective image corresponding to the perspective image data; and a display control unit for controlling the display unit. The optical system includes a hyperbolic mirror having the shape of one sheet of a two-sheeted hyperboloid; the optical axis of the hyperbolic mirror coincides with the optical axis of the imaging lens, and the principal point of the imaging lens is located at one of the focal points of the hyperbolic mirror. The display unit displays a perspective image converted into a bird's-eye view of the moving object and its surroundings. [26] In one embodiment of the present invention, the at least one omnidirectional visual sensor is arranged so that a bird's-eye view image of the entire moving object and its surroundings is converted into image data. [27] In one embodiment of the present invention, the display unit simultaneously or selectively displays the panoramic image and the perspective image. [28] In one embodiment of the invention, the display unit displays an image viewed in the direction opposite to the most likely moving direction of the moving object. [29] In one embodiment of the present invention, the image processing means converts the image data corresponding to a first area in the omnidirectional field of view around the optical system into first perspective image data.
[30] In one embodiment of the present invention, in response to control by the display control unit, the image processing means converts image data corresponding to a second area in the omnidirectional field of view around the optical system, which does not coincide with the first area, into second perspective image data. [31] In one embodiment of the present invention, the second area is the same as the area obtained by performing at least one of a translational movement process and a zoom-in/zoom-out process on the first area. [32] In one embodiment of the invention, the optical system is arranged such that its optical axis is perpendicular to the most likely direction of movement of the moving body. [33] In an embodiment of the present invention, in response to control by the display control unit, the display unit displays an image of the moving body on its display screen such that the moving body appears at a predetermined position of the displayed image. [34] When displaying a bird's-eye perspective image of the moving object and its surroundings, it is preferable that the display unit display the image so that the moving object appears at a predetermined position on the display screen. [35] According to the present invention, the display unit displays a bird's-eye perspective image of the moving object and its surroundings. In this case, when the display unit displays a bird's-eye view of the entire moving object and its surroundings, the operator of the moving object can simultaneously check the surroundings of the moving object in all directions.
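As an illustration of how a second area can be derived from a first area by translational movement (tilt/pan) and zoom-in/zoom-out processing, as in paragraphs [30] and [31] above, the following is a minimal sketch. The `ViewRegion` representation (center angles plus field-of-view width) and all names are hypothetical illustrations, not taken from the patent.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ViewRegion:
    """A viewing window into the omnidirectional field, given by its
    center direction (pan/tilt angles, degrees) and angular size."""
    pan: float    # horizontal center angle
    tilt: float   # vertical center angle
    fov: float    # field-of-view width (smaller fov = zoomed in)

def translate(region: ViewRegion, d_pan: float, d_tilt: float) -> ViewRegion:
    """Translational movement (tilt/pan processing): shift the window,
    wrapping the pan angle into [0, 360)."""
    return replace(region, pan=(region.pan + d_pan) % 360.0,
                   tilt=region.tilt + d_tilt)

def zoom(region: ViewRegion, factor: float) -> ViewRegion:
    """Zoom-in (factor > 1) or zoom-out (factor < 1) by narrowing or
    widening the field of view."""
    return replace(region, fov=region.fov / factor)

# The "second area" of the text is the first area after pan and zoom:
first = ViewRegion(pan=0.0, tilt=-30.0, fov=60.0)
second = zoom(translate(first, d_pan=90.0, d_tilt=0.0), factor=2.0)
```

In this sketch the second area is fully determined by the first area and the requested pan/zoom amounts, mirroring the claim that it equals the first area after translational movement and/or zoom processing.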
[36] When the omnidirectional visual sensor can directly convert a bird's-eye view image of the entire moving object and its surroundings into image data, the display unit can display a bird's-eye perspective image of the entire moving object and its surroundings converted from the image data derived from the omnidirectional visual sensor. [37] However, in some cases the omnidirectional visual sensor cannot directly convert a bird's-eye view image of the entire moving object and its surroundings into image data. For example, when the omnidirectional visual sensor is located at a position higher than the vehicle body (or roof), the omnidirectional visual sensor can obtain a bird's-eye view image of the entire vehicle and its surroundings as seen from a position directly above the vehicle. However, when the omnidirectional visual sensor is located at a position lower than the roof of the vehicle, part of the field of view of the omnidirectional visual sensor is blocked by the vehicle, so that a bird's-eye view image of the entire vehicle and its surroundings cannot be obtained. [38] In this case, a plurality of perspective images obtained through a plurality of omnidirectional visual sensors can be combined to display a single bird's-eye view of the entire vehicle and its surroundings. This configuration allows the operator of the moving body to check the periphery of the moving body in all directions at the same time. [39] Alternatively, vehicle image data representing a planar image (a bird's-eye view image) of the vehicle previously captured from a position directly above the vehicle may be stored in a storage unit of the image processing means. By combining the vehicle image data stored in the storage unit with the perspective image data obtained by converting the image obtained through the omnidirectional visual sensor, the display unit can display a perspective image representing the entire vehicle at a predetermined position on the display screen. [40] Alternatively, image data of a planar image (bird's-eye view image) of the vehicle previously formed using computer graphics software may be stored in a storage unit of the image processing means. By combining the previously formed image data with the perspective image data obtained by converting the image derived from the omnidirectional visual sensor, the display unit can display a combined image representing the entire vehicle at a predetermined position on the display screen. [41] In this way, a prepared image of the moving object and the perspective image, obtained through the omnidirectional visual sensor, representing the moving object and its surroundings can be combined into the image to be displayed. From this combined image, the operator of the moving object can immediately grasp the relative distance between the moving object and objects around it. By using a previously captured image of the moving object, the operator can grasp the distance between the moving object and surrounding objects even more intuitively. [42] In one embodiment of the present invention, the display unit simultaneously displays an image viewed in the direction opposite to the most likely moving direction of the moving object and an image viewed in a direction that is neither the same as nor opposite to the most likely moving direction of the moving object. [43] In one embodiment of the invention, the moving body is a vehicle.
[44] In one embodiment of the invention, the vehicle comprises a first bumper provided on the side of the most probable direction of movement of the vehicle and a second bumper provided on the side of the vehicle opposite to the most probable direction of movement. The at least one omnidirectional visual sensor comprises a first omnidirectional visual sensor provided on the first bumper and a second omnidirectional visual sensor provided on the second bumper. [45] In one embodiment of the invention, the first omnidirectional visual sensor is disposed at one of the right end and the left end of the first bumper with respect to the most likely moving direction of the vehicle. The second omnidirectional visual sensor is disposed at the end of the second bumper that is diagonally opposite, with respect to the body of the vehicle, the end of the first bumper at which the first omnidirectional visual sensor is disposed. [46] In one embodiment of the present invention, the display unit displays an image obtained by combining a first perspective image derived from the first omnidirectional visual sensor and a second perspective image derived from the second omnidirectional visual sensor. [47] In one embodiment of the present invention, the image processing means includes a storage unit for storing image data of the moving object; the image processing means combines the image data of the moving object in the storage unit with the perspective image data derived from the optical system, and the display unit displays a perspective image including an image representing the moving object in accordance with the combined image data. [48] In one embodiment of the present invention, the image data of the moving object is image data formed using computer graphics software. [49] In one embodiment of the present invention, the image data of the moving object is image data obtained by capturing an image of the moving object.
[50] In one embodiment of the present invention, the omnidirectional vision system further comprises a temperature measuring unit for measuring the ambient temperature of the moving object. When the ambient temperature measured by the temperature measuring unit is below a predetermined temperature, the display unit displays a perspective bird's-eye view image of the moving object and its surroundings after the moving object becomes movable. [51] In one embodiment of the present invention, when displaying a perspective image of the region where the display area of the bird's-eye perspective image of the moving object and its surroundings obtained through the first omnidirectional visual sensor overlaps the display area of the bird's-eye perspective image of the moving object and its surroundings obtained through the second omnidirectional visual sensor, the display unit displays, under control by the display control unit, the perspective image derived from one of the first omnidirectional visual sensor and the second omnidirectional visual sensor. [52] In addition, when the ambient temperature of the movable body is lower than the predetermined temperature, the display unit displays a bird's-eye view image of the movable body and its surroundings after the movable body becomes movable. In order to obtain a bird's-eye view of the moving object and its surroundings, the omnidirectional visual sensor is arranged such that its optical axis is perpendicular to the most likely direction of movement. When the moving object is a vehicle, the omnidirectional visual sensor is arranged to obtain a perspective image viewed in a direction 90° from the horizontal plane (i.e., vertically downward). When the perspective image obtained by converting the image obtained through this omnidirectional visual sensor is displayed, the operator of the moving object can immediately check the safety around the moving object.
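The temperature-dependent behavior described in [50] and [52] (a bird's-eye view shown automatically once the vehicle becomes movable, when the measured ambient temperature is below the predetermined value) can be sketched as simple selection logic. The threshold value and all names here are assumptions for illustration, not values from the patent.

```python
FREEZE_THRESHOLD_C = 3.0  # hypothetical "predetermined temperature"

def select_initial_view(ambient_temp_c: float, vehicle_movable: bool) -> str:
    """Choose the start-up display mode: when the ambient temperature is
    below the threshold (windows likely fogged or frozen), show the
    bird's-eye perspective view as soon as the vehicle becomes movable;
    otherwise fall back to an ordinary default view."""
    if ambient_temp_c < FREEZE_THRESHOLD_C and vehicle_movable:
        return "birds_eye_perspective"
    return "default_view"
```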
[53] In this specification, the optical image which is subjected to central projection conversion by the optical system is regarded as an image of the surroundings of the optical system viewed from one of the focal points of the optical system. [54] Hereinafter, the function of the present invention will be described. [55] According to the present invention, the optical system of the omnidirectional visual sensor is capable of central projection conversion of the image of its surroundings. The optical system includes, for example, a hyperbolic mirror having the shape of one sheet of a two-sheeted hyperboloid. In the optical system, the optical axis of the hyperbolic mirror coincides with the optical axis of the imaging lens included in the imaging unit of the omnidirectional visual sensor, and the principal point of the imaging lens is located at one of the focal points of the hyperbolic mirror. [56] The optical image obtained through the optical system is converted into image data by the imaging unit, and the image data is converted into at least one of panoramic image data and perspective image data. [57] The display unit displays at least one of the panoramic image according to the panoramic image data and the perspective image according to the perspective image data. [58] The optical image obtained by the imaging unit is regarded as an image seen from one of the focal points of the optical system. Therefore, the optical image can be converted into a panoramic image or a perspective image by performing a coordinate transformation from polar coordinates to rectangular coordinates. [59] The selection of the image to be displayed and the selection of the size of the displayed image are performed by the display control unit. [60] In one embodiment, the omnidirectional visual sensor is positioned so that perspective bird's-eye image data of the moving object and its surroundings can be obtained. When the moving object is a vehicle, the bird's-eye view image of the vehicle and its surroundings can generally be displayed by rotating the viewing direction of the perspective image obtained through the omnidirectional visual sensor so that it points downward at 90° with respect to the horizontal plane. [61] In the above configuration, the display unit can display a perspective bird's-eye view image of the moving object and its surroundings. [62] As a result, the driver does not need to switch between a plurality of cameras or change the direction of a selected camera, unlike with a conventional vehicle monitoring apparatus in which each camera is designed for one-way monitoring. [63] The display unit displays a perspective image of the area, within the surroundings of the omnidirectional visual sensor, that lies opposite to the direction in which the moving object is most likely to move. [64] On the other hand, in response to control by the display control unit, the display unit can display an image obtained by performing at least one of a (vertical and/or horizontal) translational movement process (tilt/pan processing) and zoom-in/zoom-out processing. [65] Thus, this function is useful when the driver needs to check the distance between his vehicle and an adjacent vehicle or obstacle: for example, when the driver parks in or leaves a garage, or when the driver parks or stops the vehicle as close as possible to an adjacent vehicle or obstacle at the edge of a road. [66] In the present specification, the "zoom" operation refers to either an enlargement or a reduction operation.
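The polar-to-rectangular coordinate transformation mentioned in [58], which unwraps the circular input image into a panoramic image, might be sketched as follows. The geometry (image center, inner and outer radii of the donut-shaped region) is assumed for illustration; a real implementation would sample pixel values at the computed source coordinates.

```python
import math

def panorama_coords(width, height, cx, cy, r_in, r_out):
    """For each pixel (i, j) of a width x height panoramic image, return
    the source coordinates (x, y) in the circular input image: column i
    maps to an azimuth angle theta, row j maps to a radius between the
    inner radius r_in and the outer radius r_out of the donut region."""
    mapping = {}
    for j in range(height):
        r = r_in + (r_out - r_in) * j / (height - 1)
        for i in range(width):
            theta = 2.0 * math.pi * i / width
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            mapping[(i, j)] = (x, y)
    return mapping

# A 360-pixel-wide panorama samples the full circle once per degree:
m = panorama_coords(width=360, height=2, cx=100.0, cy=100.0,
                    r_in=20.0, r_out=80.0)
```

Each panorama column thus corresponds to one viewing direction around the sensor, which is exactly the polar-to-rectangular unwrapping the text describes.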
[67] When the optical system is positioned so that its optical axis is perpendicular to the direction in which the moving object is most likely to move, the perspective image obtained by converting the image captured by the optical system can be a bird's-eye view image of the entire moving object as seen from directly above it. In this case, for example, the driver can easily determine the distance between his vehicle and an adjacent vehicle or obstacle when parking in or leaving a garage, or when parking or stopping as close as possible to an adjacent vehicle or obstacle at the edge of a road. Even if the optical axis of the optical system is not perpendicular to the horizontal plane, a comparable image can be obtained by changing the viewing direction of the perspective image obtained through the omnidirectional visual sensor so that it points vertically downward. [68] Further, when the display unit displays the perspective image of the moving object in response to control by the display control unit, the perspective image may be shifted so that the moving object appears at a predetermined position (for example, the center) of the displayed perspective image. In this case, the operator of the movable body can easily grasp the surroundings. [69] Further, when an image of the moving object, either captured beforehand or pre-generated using computer graphics software, is displayed at a predetermined position on the display screen of the display unit, the operator of the moving object can easily recognize the distance relationship between the moving object and objects (obstacles, etc.) around it. [70] In addition, the display unit simultaneously displays an image viewed in the direction opposite to the direction in which the movable body is most likely to move and an image viewed in a direction neither coincident with nor opposite to that direction. With this configuration, the driver can easily check an area seen in a direction different from the most likely direction of movement. In general, the operator of a moving object faces the direction in which movement is most likely; therefore, it is important from a safety point of view to be able to check areas seen in other directions. [71] For example, when the moving body is a vehicle, one of two omnidirectional visual sensors is located on the front bumper and the other on the rear bumper. In particular, when one omnidirectional visual sensor is located at either the front right corner or the front left corner of the vehicle, and the other is located at the diagonally opposite rear corner, an image with an approximately 360° field of view around the entire moving object can be obtained, including the areas very close to the vehicle that lie in the driver's blind spots. [72] In addition, the display unit displays a perspective image obtained by combining the perspective bird's-eye view image of the vehicle obtained from the omnidirectional visual sensor disposed at the front corner of the vehicle with the perspective bird's-eye view image of the vehicle obtained from the other omnidirectional visual sensor disposed at the diagonally opposite rear corner. With this configuration, images of the entire surroundings of the vehicle can be displayed on one display screen, so that the driver can easily check the safety around the vehicle. [73] In addition, when the ambient temperature falls to a certain temperature or lower, for example in winter, in a high-altitude region, or in a high-latitude region, and it is difficult for the operator of the moving object to see outward from the inside (for example, because fog has formed on the windows of the vehicle), the display unit displays a perspective bird's-eye view image of the movable body and its surroundings after the movable body becomes movable, whereby the operator of the movable body can easily confirm the safety around it. For example, when the moving object is a vehicle, the image around the vehicle can be displayed by rotating the viewing direction of the perspective image obtained through the omnidirectional visual sensor downward so that it points at 90° to the horizontal plane. [74] Accordingly, the present invention described below has the advantages of providing (1) a surroundings monitoring apparatus that makes it easy to observe the area around a moving body, reducing the burden on the user and improving safety around the moving body, and (2) a surroundings monitoring apparatus that allows the operator of the moving body to quickly grasp the situation around the moving body when starting to move it. [75] These and other advantages of the present invention will become apparent to those skilled in the art upon reading and understanding the following detailed description with reference to the accompanying drawings. [1] FIG. 1A is a plan view showing the configuration of a vehicle incorporating an ambient monitoring device according to Embodiment 1 of the present invention. FIG. 1B is a side view of the vehicle of FIG. 1A. [2] FIG. 1C is a plan view showing a modified structure of a vehicle incorporating an ambient monitoring device according to Embodiment 1 of the present invention. FIG. 1D is a side view of the vehicle of FIG. 1C. [3] FIG. 2 is a block diagram showing the configuration of the ambient monitoring device according to the first embodiment. [4] FIG. 3 shows a typical configuration of an optical system used in the ambient monitoring apparatus according to the first embodiment. [5] FIG. 4A is a block diagram showing the structure of an image processing unit used in the ambient monitoring apparatus according to the first embodiment.
[6] FIG. 4B is a block diagram showing the configuration of an image conversion unit used in the ambient monitoring apparatus according to the first embodiment. [7] FIG. 5 shows an example of panorama (360°) image conversion according to the first embodiment: FIG. 5A shows an input circular image, FIG. 5B shows a donut-shaped image which has undergone panorama image conversion, and FIG. 5C shows a panoramic image converted to rectangular coordinates. [8] FIG. 6 illustrates a perspective transformation in the ambient monitoring apparatus according to the first embodiment. [9] FIG. 7 shows an example of a display screen of the display unit in the ambient monitoring apparatus according to the first embodiment. [10] FIG. 8 shows another example of the display screen of the display unit in the ambient monitoring apparatus according to the first embodiment. [11] FIG. 9A is a plan view showing a vehicle including the ambient monitoring device for a moving body according to the second embodiment of the present invention. FIG. 9B is a side view of the vehicle shown in FIG. 9A. [12] FIG. 10 shows an example of a display screen of the display unit in the ambient monitoring apparatus according to the second embodiment. [13] FIG. 11 shows the divided areas of a display screen of the display unit in the ambient monitoring apparatus according to the second embodiment. [14] FIG. 12A is a plan view showing the configuration of a vehicle incorporating an ambient monitoring device according to a third embodiment of the present invention. FIG. 12B is a side view of the vehicle of FIG. 12A. [15] FIG. 13 is a block diagram showing the configuration of the ambient monitoring device according to the third embodiment. [76] Hereinafter, embodiments of the present invention will be described with reference to the drawings. [77] (First embodiment) [78] FIG. 1A is a plan view showing the structure of a movable body 100 equipped with a surroundings monitoring apparatus 200 according to a first embodiment of the present invention. FIG. 1B is a side view illustrating the movable body 100 of FIG. 1A. [79] In the first embodiment, a vehicle is described as a specific example of the movable body 100. [80] In the first embodiment, the vehicle 100 is equipped with a surroundings monitoring apparatus 200 for a moving body. As illustrated in FIGS. 1A and 1B, the ambient monitoring apparatus 200 includes an omnidirectional visual sensor 210 and an operation/control unit 220. The omnidirectional visual sensor 210 is disposed on the roof of the vehicle 100. The operation/control unit 220 is provided, for example, near the driver's seat of the vehicle 100. [81] The omnidirectional visual sensor 210 shown in FIGS. 1A and 1B typically has an omnidirectional field of view covering 360° around it in the horizontal direction. [82] FIG. 1C is a plan view showing the structure of a movable body 100A equipped with a surroundings monitoring apparatus 200A according to the first embodiment of the present invention. FIG. 1D is a side view showing the movable body 100A of FIG. 1C. The vehicle 100A is equipped with a surroundings monitoring apparatus 200A for a moving object. As illustrated in FIGS. 1C and 1D, the ambient monitoring apparatus 200A includes a first omnidirectional visual sensor 210A, a second omnidirectional visual sensor 210B, and an operation/control unit 220. The first omnidirectional visual sensor 210A is disposed at the front of the vehicle 100A, and the second omnidirectional visual sensor 210B is disposed at the rear of the vehicle 100A. The operation/control unit 220 is provided near the driver's seat of the vehicle 100A. [83] The vehicle 100A further includes a front bumper 110 and a rear bumper 120. [84] In the first embodiment, the first omnidirectional visual sensor 210A is disposed at the center portion of the front bumper 110, and the second omnidirectional visual sensor 210B is disposed at the center portion of the rear bumper 120.
Each of the first omnidirectional visual sensor 210A and the second omnidirectional visual sensor 210B typically has an omnidirectional field of view covering 360° around it. [85] However, half of the field of view of the first omnidirectional visual sensor 210A (about 180° of rearward viewing angle) is blocked by the vehicle 100A. That is, the effective field of view of the first omnidirectional visual sensor 210A is limited to the 180° frontal field of view (from the left side to the right side of the vehicle 100A). Similarly, half of the field of view of the second omnidirectional visual sensor 210B (about 180° of forward viewing angle) is blocked by the vehicle 100A. That is, the effective field of view of the second omnidirectional visual sensor 210B is limited to the 180° rearward field of view (from the left side to the right side of the vehicle 100A). [86] FIG. 2 is a block diagram showing the configuration of the ambient monitoring apparatus 200 according to the first embodiment. [87] The ambient monitoring apparatus 200 includes: an omnidirectional visual sensor 210 for converting an image obtained of its surroundings into image data; and an operation/control unit 220 for processing the image data converted by the omnidirectional visual sensor 210 and displaying an image corresponding to the processed image data. The surroundings monitoring apparatus 200A shown in FIGS. 1C and 1D has substantially the same function as the ambient monitoring apparatus 200 except that it includes two omnidirectional visual sensors. In addition, each of the first omnidirectional visual sensor 210A and the second omnidirectional visual sensor 210B shown in FIGS. 1C and 1D has substantially the same function as the omnidirectional visual sensor 210.
[88] The omnidirectional visual sensor 210 includes: an optical system 212 capable of obtaining an image of the field of view around it and capable of central projection transformation of the image; and an imaging unit 214 for converting the image obtained by the optical system 212 into image data. The imaging unit 214 includes: an imaging lens 216; a light receiving unit 217 for receiving a centrally projected optical image; and an image data generation unit 218 for converting the optical image received by the light receiving unit 217 into image data. [89] The operation/control unit 220 includes: an image processing unit 230 for converting the image data produced by the imaging unit 214 into at least one of panoramic image data and perspective image data; a display unit 240 for displaying an output 236 from the image processing unit 230; and a display control unit 250 for selecting, based on an output 238 from the image processing unit 230 and/or an input 254 supplied from the outside, which of the images around the vehicle 100 (FIGS. 1C and 1D) is displayed on the display unit 240, and for controlling the size of the displayed image. The image processing unit 230 supplies an output 262 to an alarm generation unit 260 so that the alarm generation unit 260 generates an alarm when necessary. The image processing unit 230 includes an image conversion unit 232, an output buffer memory 234, and a storage unit 235. The display unit 240 displays at least one of a panoramic image corresponding to the panoramic image data and a perspective image corresponding to the perspective image data. The storage unit 235 stores data used in the image processing performed by the image processing unit 230. For example, the storage unit 235 stores a bird's-eye image of the vehicle 100 or the vehicle 100A as captured from a position directly above the vehicle 100 or the vehicle 100A.
[90] For example, the image processing unit 230 may be disposed in the engine compartment at the front end of the vehicle 100 or in the trunk at the rear end of the vehicle 100. The display unit 240 and the display control unit 250 may be disposed in or next to the front panel near the driver's seat. [91] Hereinafter, the respective elements will be described in detail. [92] FIG. 3 shows a specific example of the optical system 212 capable of central projection transformation. [93] Here, the optical system 212 includes a hyperbolic mirror 310 having the shape of one sheet of a two-sheeted hyperboloid. The hyperbolic mirror 310 and the imaging unit 214 are arranged such that the optical axis (z-axis) 312 of the hyperbolic mirror 310 coincides with the optical axis 314 of the imaging lens 216 included in the imaging unit 214. The first principal point 215 of the imaging lens 216 is located at one of the focal points of the hyperbolic mirror 310. With such a structure, central projection transformation is possible. That is, the image obtained by the imaging unit 214 corresponds to the image around the hyperbolic mirror 310 as seen from the focal point ① of the hyperbolic mirror 310. An optical system 212 having such a configuration is disclosed, for example, in Japanese Laid-Open Publication No. 94-295333, and only its principal optical features are described below. [94] In FIG. 3, the hyperbolic mirror 310 can be formed by providing a mirror surface on the convex region of one of the two curved surfaces (two-sheeted hyperboloid) obtained by rotating a hyperbola around the z-axis. The two-sheeted hyperboloid is defined as follows: [95] (X² + Y²)/a² − Z²/b² = −1, c² = a² + b² ... (1) [96] Here, a and b are constants defining the shape of the hyperboloid, and c is a constant defining the focal points of the hyperboloid. Hereinafter, the constants a, b and c are generically referred to as "mirror constants".
Since the hyperbolic mirror 310 has a curved surface obtained by rotating a hyperbola, the axis of rotation of the hyperbolic mirror 310 coincides with its optical axis 312. [97] The hyperbolic mirror 310 has two focal points ① and ②. All light traveling from the outside toward one of the focal points (in this case, focal point ①) is reflected by the hyperbolic mirror 310 so as to reach the other focal point (in this case, focal point ②). The hyperbolic mirror 310 and the imaging unit 214 are arranged such that the optical axis 312 of the hyperbolic mirror 310 coincides with the optical axis 314 of the imaging lens 216 of the imaging unit 214, and the first principal point 215 of the imaging lens 216 is located at focal point ②. With such a configuration, the image obtained by the imaging unit 214 always corresponds to the image seen from focal point ① of the hyperbolic mirror 310, regardless of the viewing direction, and can be converted into image data. In this arrangement, however, an image of the interior of the hyperbolic mirror 310 cannot be obtained. In addition, since the imaging unit 214 has a finite size, light traveling toward focal point ① of the hyperbolic mirror 310 that is blocked by the imaging unit 214 is not received. [98] The imaging unit 214 may be a video camera. The imaging unit 214 converts the optical image obtained through the hyperbolic mirror 310 of FIG. 3 into image data using a solid-state imaging device such as a CCD or a CMOS device. The converted image data is transmitted to the image processing unit 230. [99] FIG. 4A is a block diagram showing the structure of the image processing unit 230. [100] The image processing unit 230 includes an image conversion unit 232 and an output buffer memory 234. The image conversion unit 232 includes an A/D converter 410, an input buffer memory 420, a CPU 430, a lookup table (LUT) 440, and an image conversion logic 450. The elements of the image conversion unit 232 are connected to the output buffer memory 234 via a bus line 460.
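The two-focal-point property described in paragraph [97] can be checked numerically. The sketch below assumes the standard two-sheeted hyperboloid form (X² + Y²)/a² − Z²/b² = −1 with focal points at (0, 0, +c) and (0, 0, −c), c = √(a² + b²); it reflects a ray aimed at focal point ① off the mirror surface and verifies that the reflected ray crosses the optical axis at focal point ② (the mirror constants chosen are arbitrary examples).

```python
import math

a, b = 1.0, 1.0
c = math.sqrt(a * a + b * b)   # foci of the mirror at z = +c and z = -c

def reflected_axis_crossing(x):
    """For a ray aimed at focal point (1) = (0, +c) that hits the mirror at
    horizontal offset x (y = 0 slice), return the z value where the reflected
    ray crosses the optical axis. It should always be -c, i.e. focal point (2)."""
    # Point on the upper mirror sheet: z = b * sqrt(1 + x^2/a^2).
    px = x
    pz = b * math.sqrt(1.0 + (x * x) / (a * a))
    # Travel direction of the incoming ray at the mirror point (toward focus 1).
    dlen = math.hypot(-px, c - pz)
    dx, dz = -px / dlen, (c - pz) / dlen
    # Unit surface normal from the gradient of x^2/a^2 - z^2/b^2 + 1.
    nx, nz = 2 * px / (a * a), -2 * pz / (b * b)
    nlen = math.hypot(nx, nz)
    nx, nz = nx / nlen, nz / nlen
    # Specular reflection: d' = d - 2 (d . n) n.
    dot = dx * nx + dz * nz
    rx, rz = dx - 2 * dot * nx, dz - 2 * dot * nz
    # Walk the reflected ray back to the axis (x = 0) and read off z.
    t = -px / rx
    return pz + t * rz

for x in (0.5, 1.0, 2.0):
    print(round(reflected_axis_crossing(x), 6))  # each approx -c = -1.414214
```

Because every reflected ray passes through a single point (focal point ②, where the lens principal point sits), the camera sees a true central projection of the surroundings as viewed from focal point ①.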
[101] FIG. 4B is a block diagram showing the structure of the image conversion unit 232. [102] The image conversion unit 232 receives the image data obtained by the imaging unit 214. If the image data is an analog signal, it is converted into a digital signal by the A/D converter 410 and transmitted to the input buffer memory 420. If the image data is a digital signal, it is transferred directly to the input buffer memory 420. [103] The image conversion unit 232 performs image processing on the output (image data) from the input buffer memory 420 as necessary. For example, the image conversion logic 450, using the lookup table (LUT) 440, converts the image data into at least one of panoramic image data and perspective image data, or translates (vertically/horizontally) or enlarges/reduces an image to be displayed. After the image processing, the processed image data is input to the output buffer memory 234 shown in FIG. 4A. During the processing, the elements are controlled by the CPU 430. The CPU 430 may be a reduced instruction set computer (RISC) or a complex instruction set computer (CISC). [104] The principle of the image conversion performed by the image conversion logic 450 will now be described. The image conversion includes a panoramic conversion for obtaining a panoramic (360°) image and a perspective conversion for obtaining a perspective image. Further, a perspective image can be rotationally transferred in the horizontal direction (horizontal rotational transfer, so-called "pan movement") and rotationally transferred in the vertical direction (vertical rotational transfer, so-called "tilt movement"). [105] First, the panoramic (360°) image conversion will be described with reference to FIG. 5. In part a of FIG. 5, the image 510 is the circular input image obtained by the imaging unit 214. Part b of FIG. 5 shows the donut-shaped region 515 of the image to which the panoramic image conversion is applied. Part c of FIG. 5 shows a panoramic image 520 obtained by converting the image 510 into rectangular coordinates. [106] Part a of FIG. 5 shows the input circular image 510 in a polar coordinate format in which the center of the image 510 is at the coordinate origin (Xo, Yo). In these polar coordinates, a pixel P of the image 510 is represented by P(r, θ). Referring to part c of FIG. 5, in the panoramic image 520, the point corresponding to the pixel P of the image 510 (part a of FIG. 5) is represented by P2(x, y). Using a point PO(ro, θo) as a reference point, the conversion of the circular image 510 shown in part a of FIG. 5 into the square panoramic image 520 shown in part c of FIG. 5 is represented by: [107] x = θ − θo [108] y = r − ro [109] If the input circular image 510 (part a of FIG. 5) is expressed in rectangular coordinates such that the center of the circular image is located at the origin (Xo, Yo) of the rectangular coordinate system, the point P of the image 510 is represented by P(X, Y). Thus, x and y are expressed as: [110] θ = tan⁻¹(Y/X), r = √(X² + Y²) [111] therefore, [112] x = tan⁻¹(Y/X) − θo, y = √(X² + Y²) − ro [113] Pan/tilt movement of the panoramic image can be realized by changing the reference point PO(ro, θo) to another position. Pan movement is realized by changing the value of θo. In the first embodiment, however, tilt movement is not performed, because the resulting image would go outside of the conversion area. [114] Next, the perspective conversion will be described with reference to FIG. 6. In the perspective conversion, the position on the input optical image, obtained by the light receiving unit 217 of the imaging unit 214, which corresponds to a point in the three-dimensional space is calculated, and the image information at that position of the input optical image is assigned to the corresponding point of the perspective image, whereby central projection transformation is performed. [115] In particular, as shown in FIG. 6, a point in the three-dimensional space is represented by P3, the corresponding point on the circular image formed on the light receiving surface of the light receiving unit 217 of the imaging unit 214 is represented by R(r, θ), and the focal length of the imaging lens 216 is F. The light receiving unit 217 is disposed at the distance of the focal length F from the imaging lens 216. The mirror constants of the hyperbolic mirror 310 are (a, b, c), the same as a, b and c in FIG. 3. With these parameters, the following relationship (2) is obtained: [116] r = F × tan((π/2) − β) ... (2) [117] In FIG. 6, α is the incident angle of light traveling from the object point (point P3) toward focal point ①, with respect to the horizontal plane containing focal point ①; β is the angle of incidence of the light which exits from point P3, is reflected at a point on the hyperbolic mirror 310, and enters the imaging lens 216 (angle β is not the angle with respect to the optical axis 314 of the imaging lens 216, but the angle with respect to a plane of the imaging lens 216 perpendicular to the optical axis 314). The angles α, β and θ are expressed as follows: [118] tan α = tz/√(tx² + ty²), tan β = ((b² + c²)·sin α − 2bc)/((b² − c²)·cos α), tan θ = ty/tx [119] From the above, equation (2) is expressed as: [120] r = F × ((b² − c²)·cos α)/((b² + c²)·sin α − 2bc) [121] The coordinates of the point on the circular image 510 are converted into rectangular coordinates R(X, Y). X and Y are expressed as follows: [122] X = r × cos θ ... (3), Y = r × sin θ ... (4) [123] Thus, from the above equations: [124] X = F × ((b² − c²)·tx)/((b² + c²)·tz − 2bc·√(tx² + ty² + tz²)) ... (5) [125] Y = F × ((b² − c²)·ty)/((b² + c²)·tz − 2bc·√(tx² + ty² + tz²)) ... (6) [126] Next, the horizontal rotational transfer and the vertical rotational transfer will be described. Referring again to FIG. 6, consider a square image plane located in the three-dimensional space at a position corresponding to a rotation angle θ around the z-axis 312 and having a width W and a height h. Here, R is the distance between focal point ① and the center of the square image plane, and φ is the depression angle (equal to the incident angle α) of the center of the square image plane, the center being the point P3.
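The perspective conversion ultimately maps a scene point (tx, ty, tz), taken relative to focal point ①, to sensor coordinates (X, Y). The sketch below assumes the standard hyperboloidal-projection form of equations (5) and (6), X = F(b² − c²)·tx/((b² + c²)·tz − 2bc·√(tx² + ty² + tz²)) and likewise for Y; the mirror constants and focal length are illustrative values, not from the patent.

```python
import math

def project(tx, ty, tz, a=1.0, b=1.0, F=1.0):
    """Map a scene point (tx, ty, tz), relative to mirror focal point (1),
    to sensor coordinates (X, Y) via the hyperboloidal central projection.
    Mirror constants a, b and lens focal length F are illustrative."""
    c2 = a * a + b * b              # c^2, since c = sqrt(a^2 + b^2)
    c = math.sqrt(c2)
    d = math.sqrt(tx * tx + ty * ty + tz * tz)
    denom = (b * b + c2) * tz - 2.0 * b * c * d
    X = F * (b * b - c2) * tx / denom
    Y = F * (b * b - c2) * ty / denom
    return X, Y

# A point on the mirror axis maps to the image center...
print(math.hypot(*project(0.0, 0.0, 5.0)))   # 0.0
# ...and rotating a scene point about the axis only rotates its image point:
# the image radius r is unchanged, as the polar form R(r, theta) implies.
r1 = math.hypot(*project(3.0, 0.0, -1.0))
r2 = math.hypot(*project(0.0, 3.0, -1.0))
print(math.isclose(r1, r2))                  # True
```

In an implementation this mapping would be evaluated once per output pixel (or pre-tabulated in the lookup table 440) to fetch the corresponding pixel of the circular input image.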
The coordinates of the point Q(tx, ty, tz) at the upper right corner of the square image plane are expressed as follows: [127] tx = R·cos φ·cos θ − W·sin θ − h·sin φ·cos θ ... (7), ty = R·cos φ·sin θ + W·cos θ − h·sin φ·sin θ ... (8), tz = R·sin φ + h·cos φ ... (9) [128] By combining equations (7), (8) and (9) with equations (5) and (6), the coordinates (X, Y) of the point on the circular image formed on the light receiving unit 217 of the imaging unit 214 which corresponds to the point Q of the square image plane can be obtained: [129] X = F × ((b² − c²)·tx)/((b² + c²)·tz − 2bc·√(tx² + ty² + tz²)) ... (10), Y = F × ((b² − c²)·ty)/((b² + c²)·tz − 2bc·√(tx² + ty² + tz²)) ... (11) [130] Further, suppose that the square image plane is converted into a perspective image of n pixels in width and m pixels in height. In equations (7), (8) and (9), the parameter W is varied over the range W to −W in steps of W/n, and the parameter h is varied over the range h to −h in steps of h/m, whereby the coordinates of the respective points of the square image plane are obtained. According to the coordinates of the obtained points of the square image plane, the image data at the corresponding points of the circular image formed on the light receiving unit 217 are transferred to the perspective image, which is displayed on the screen of the display unit 240 (FIG. 2). [131] Next, the horizontal rotational movement (pan movement) and the vertical rotational movement (tilt movement) in the perspective conversion will be described. First, the case where the point Q is horizontally rotated (pan movement) is described. The coordinates of the point Q′(tx′, ty′, tz′) obtained after the horizontal rotational movement are expressed as follows: [132] tx′ = R·cos φ·cos(θ + Δθ) − W·sin(θ + Δθ) − h·sin φ·cos(θ + Δθ) ... (12), ty′ = R·cos φ·sin(θ + Δθ) + W·cos(θ + Δθ) − h·sin φ·sin(θ + Δθ) ... (13), tz′ = R·sin φ + h·cos φ ... (14) [133] [134] where Δθ is the horizontal movement angle. [135] By combining equations (12), (13) and (14) with equations (5) and (6), the coordinates (X, Y) of the point on the circular image 510 formed on the light receiving unit 217 which corresponds to the point Q′(tx′, ty′, tz′) can be obtained. The same applies to the points of the square image plane other than the point Q.
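Replacing θ with θ + Δθ in the corner-point expressions is geometrically nothing more than rotating the viewing plane about the z (optical) axis. A minimal sketch of that rotation, independent of the exact corner formulas:

```python
import math

def pan(point, dtheta):
    """Rotate a point (tx, ty, tz) about the z (optical) axis by dtheta.
    This is how the horizontal rotational transfer (pan movement) moves
    every point of the perspective view plane; the height tz is unchanged."""
    tx, ty, tz = point
    cos_d, sin_d = math.cos(dtheta), math.sin(dtheta)
    return (tx * cos_d - ty * sin_d,
            tx * sin_d + ty * cos_d,
            tz)

q = (2.0, 0.0, 1.0)
q90 = pan(q, math.pi / 2)          # pan by 90 degrees
print([round(v, 6) for v in q90])  # [0.0, 2.0, 1.0]
```

Applying `pan` to each sampled plane point and then projecting with equations (5) and (6) therefore yields the same horizontally rotated perspective image as re-evaluating equations (12) to (14).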
In equations (12), (13) and (14), the parameter W is varied over the range W to −W in steps of W/n, and the parameter h is varied over the range h to −h in steps of h/m, whereby the coordinates of the respective points of the square image plane are obtained. According to the coordinates of the obtained points of the square image plane, the image data at the corresponding points of the circular image 510 formed on the light receiving unit 217 are transferred to the perspective image, whereby a horizontally rotated image is obtained. [136] Next, the case where the point Q is vertically rotated (tilt movement) is described. The coordinates of the point Q″(tx″, ty″, tz″) obtained after the vertical rotational movement are expressed as follows: [137] tx″ = R·cos(φ + Δφ)·cos θ − W·sin θ − h·sin(φ + Δφ)·cos θ ... (15), ty″ = R·cos(φ + Δφ)·sin θ + W·cos θ − h·sin(φ + Δφ)·sin θ ... (16), tz″ = R·sin(φ + Δφ) + h·cos(φ + Δφ) ... (17) [138] where Δφ is the vertical movement angle. [139] By combining equations (15), (16) and (17) with equations (5) and (6), the coordinates (X, Y) of the point on the circular image 510 formed on the light receiving unit 217 which corresponds to the point Q″(tx″, ty″, tz″) can be obtained. The same applies to the other points of the square image plane. In equations (15), (16) and (17), the parameter W is varied over the range W to −W in steps of W/n, and the parameter h is varied over the range h to −h in steps of h/m, whereby the coordinates of the respective points of the square image plane are obtained. According to the coordinates of the obtained points of the square image plane, the image data at the corresponding points of the circular image formed on the light receiving unit 217 are transferred to the perspective image, whereby a vertically rotated image can be obtained. [140] In addition, a zoom-in/zoom-out function for the perspective image is realized by a single parameter, the parameter R.
In particular, if the parameter R in equations (7) to (9) is decreased while the parameters W and h are fixed, the field of view from focal point ① is widened, and an image equivalent to the zoom-out image obtainable by a zoom-out operation of an optical system is obtained. Conversely, if the parameter R in equations (7) to (9) is increased while the parameters W and h are fixed, the field of view from focal point ① is narrowed, and an image equivalent to the zoom-in image obtainable by a zoom-in operation of an optical system is obtained. [141] For example, consider a case where the omnidirectional visual sensor 210 is attached to the vehicle 100 such that the optical axis 314 of the imaging unit 214 is perpendicular to the ground. When the viewing direction of the perspective image is selected by the vertical rotational transfer such that α = −90°, the obtained perspective image is a bird's-eye view of the vehicle 100 and its surroundings seen from a position directly above the vehicle 100. In this case, by decreasing the parameter R as described above, the field of view can be widened so as to obtain a zoomed-out view, while by increasing the parameter R, a zoomed-in view can be obtained. Further, by performing a zoom operation (for example, a key operation) under the control of the display control unit 250, the displayed area around the omnidirectional visual sensor 210 shown in FIGS. 1A and 1B can be adjusted so that the bird's-eye view seen from the position directly above the vehicle 100 covers the entire vehicle 100 and its vicinity. [142] In the present specification, the "bird's-eye view" is a view seen, from a position above the mobile body 100, in a direction perpendicular to the most likely moving direction of the mobile body 100. [143] In the present specification, the "most likely moving direction" is the direction in which the mobile body 100 moves with the highest probability.
In general, the mobile body 100 is designed in consideration of the most likely moving direction. Moreover, the operator of the mobile body 100 generally faces in the most likely moving direction of the mobile body 100. [144] Further, in the above example, the perspective view obtained by the omnidirectional visual sensor 210 lies on a plane perpendicular to the viewing direction selected from focal point ① (for example, in FIG. 6, the direction from focal point ① to point P3), and the range of the obtained perspective view extends equally on that plane around the line along the viewing direction selected from focal point ①. However, according to the present invention, by using equations (5) and (6), an arbitrary perspective view can be obtained on a plane in the three-dimensional space covered by the optical system of the omnidirectional visual sensor 210, and it is thus clear that the plane of the resulting perspective view can form any angle with respect to the selected viewing direction. For example, if the omnidirectional visual sensor 210 is located not on the roof of the vehicle 100 but at a corner of the generally rectangular vehicle 100, the range of the obtained perspective view still extends equally around the line along the viewing direction selected from focal point ①, and therefore the image displayed on the display screen of the display unit 240 cannot show the vehicle 100 at the center of the display screen. [145] In this case, the image displayed on the display unit 240 can be shifted so that the vehicle 100 appears at the center of the image, by shifting the variable ranges of the parameters W and h in equations (15), (16) and (17). For example, consider the case where the width of the vehicle 100 is 2·lw.
In this case, in equations (15), (16) and (17), the range of the image plane is shifted vertically and horizontally by (μ, ν) (where lw = √(μ² + ν²)); that is, the ranges of the image plane are shifted from "W to −W" (width) and "h to −h" (height) to "W + μ to −W + μ" and "h + ν to −h + ν", respectively. As a result, the vehicle 100 appears at the center of the image displayed on the display unit 240. Such processing can be realized simply by adding μ and ν to the parameters W and h, respectively, in the conversion processing of the image processing unit 230. [146] Referring again to FIG. 2, the display unit 240 is a monitor using, for example, a cathode ray tube, an LCD, an EL display, or the like. The display unit 240 receives the output from the output buffer memory 234 of the image processing unit 230 and displays an image based on the received output. During this image display operation, the display control unit 250, which includes a microcomputer, can select the image (a panoramic image and/or a perspective image converted by the image processing unit 230) displayed on the display unit 240, and can control the direction and the size of the displayed image. [147] FIG. 7 shows a display screen 710 of the display unit 240. [148] In FIG. 7, the display screen 710 includes: a first perspective image display window 720; a second perspective image display window 730; a third perspective image display window 740; and a panoramic image display window 750. In the default state, the first perspective image display window 720 displays a front-view perspective image from the vehicle 100; the second perspective image display window 730 displays a left-view perspective image from the vehicle 100; and the third perspective image display window 740 displays a right-view perspective image from the vehicle 100. The panoramic image display window 750 displays a panoramic image representing the entire periphery of the vehicle 100.
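The off-center correction described in paragraph [145], shifting the sampled ranges of W and h by (μ, ν), can be sketched as follows. The helper name and the numeric values are illustrative only; the point is that adding a constant offset to the sampling range moves the vehicle to the center of the rendered view.

```python
def sample_range(limit, steps, offset=0.0):
    """Return 'steps + 1' parameter values from limit+offset down to
    -limit+offset, i.e. the range 'W to -W' (or 'h to -h') shifted by
    a constant offset (mu or nu)."""
    step = 2.0 * limit / steps
    return [limit + offset - i * step for i in range(steps + 1)]

W, n = 4.0, 8
centered = sample_range(W, n)              # W .. -W
shifted = sample_range(W, n, offset=1.5)   # W+mu .. -W+mu, with mu = 1.5

print(centered[0], centered[-1])  # 4.0 -4.0
print(shifted[0], shifted[-1])    # 5.5 -2.5
```

Each sampled (W, h) pair would then be fed through equations (7) to (9) (or (15) to (17)) and the projection of equations (5) and (6) to locate the source pixel on the circular image.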
[149] The display screen 710 further includes: a first explanation display window 725 showing a description of the first perspective image display window 720; a second explanation display window 735 showing a description of the second perspective image display window 730; a third explanation display window 745 showing a description of the third perspective image display window 740; and a fourth explanation display window 755 showing a description of the panoramic image display window 750. [150] The display screen 710 also includes: direction keys 760 for scrolling a displayed perspective image vertically/horizontally; an enlargement key 770 for enlarging an image; and a reduction key 780 for reducing an image. [151] The first through fourth explanation display windows 725, 735, 745 and 755 also function as switches for activating the respective image display windows 720, 730, 740 and 750. By activating a desired image display window (window 720, 730, 740 or 750) via the corresponding explanation display window (window 725, 735, 745 or 755) functioning as a switch, the user can scroll the image displayed in the activated window vertically or horizontally, or zoom it in and out. Whether each image display window (window 720, 730, 740 or 750) is activated can be indicated by changing the display color of the corresponding explanation display window (window 725, 735, 745 or 755). The user can translate (scroll vertically/horizontally) the image displayed in each of the perspective image display windows 720, 730 and 740, or zoom it in or out, by using the direction keys 760, the enlargement key 770 and the reduction key 780. The user can scroll the image displayed in the panoramic image display window 750 vertically/horizontally by using the direction keys 760; however, the image displayed in the panoramic image display window 750 is not enlarged or reduced.
[152] For example, when the user touches the first explanation display window 725, a signal is output to the display control unit 250 (FIG. 2). In response to the touch, the display control unit 250 changes the display color of the first explanation display window 725, or causes the first explanation display window 725 to flicker, to indicate that the first perspective image display window 720 is activated. While the first perspective image display window 720 is activated, the user can scroll the image displayed in the window 720 vertically/horizontally, and zoom it in and out, by using the direction keys 760, the enlargement key 770 and the reduction key 780. In particular, signals are sent from the direction keys 760, the enlargement key 770 and the reduction key 780 to the image conversion unit 232 of the image processing unit 230 (FIG. 2) via the display control unit 250. According to the signals from the direction keys 760, the enlargement key 770 and the reduction key 780, the image is converted, and the converted image is transmitted to the display unit 240 (FIG. 2) and displayed on the screen of the display unit 240. [153] The display screen 710 further includes an omnidirectional visual sensor switching key 790. [154] For example, when the driver operates the omnidirectional visual sensor switching key 790 provided on the display screen 710, a signal is output from the display control unit 250 to the image processing unit 230 and the display unit 240, whereby switching is performed between the front omnidirectional visual sensor (210A in FIGS. 1C and 1D) and the rear omnidirectional visual sensor (210B in FIGS. 1C and 1D) so that one of the visual sensors is selected. When the rear omnidirectional visual sensor 210B is selected, an image obtained from the rear omnidirectional visual sensor 210B is displayed.
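The window-activation and key flow of paragraphs [151] and [152] amounts to simple state kept by the display control unit: a touched explanation window selects the active image window, and subsequent keys modify only that window's view. The sketch below is a toy model; all names, step sizes and the zoom factor are ours, not from the patent.

```python
class DisplayController:
    """Toy model of the display control unit 250: explanation windows act
    as switches that activate an image window; direction/zoom keys then
    apply only to the active window. Illustrative structure only."""

    def __init__(self):
        self.windows = {"front": {"pan": 0, "tilt": 0, "zoom": 1.0},
                        "left":  {"pan": 0, "tilt": 0, "zoom": 1.0},
                        "right": {"pan": 0, "tilt": 0, "zoom": 1.0}}
        self.active = None

    def touch_description(self, name):
        """Touching an explanation window activates the matching image window
        (the real unit would also recolor or flicker the caption)."""
        self.active = name

    def press(self, key):
        """Route a direction/enlargement/reduction key to the active window."""
        if self.active is None:
            return
        win = self.windows[self.active]
        if key in ("left", "right"):
            win["pan"] += -5 if key == "left" else 5     # degrees, illustrative
        elif key in ("up", "down"):
            win["tilt"] += 5 if key == "up" else -5
        elif key == "zoom_in":
            win["zoom"] *= 1.25
        elif key == "zoom_out":
            win["zoom"] /= 1.25

ctrl = DisplayController()
ctrl.touch_description("front")
ctrl.press("right")
ctrl.press("zoom_in")
print(ctrl.windows["front"])  # {'pan': 5, 'tilt': 0, 'zoom': 1.25}
```

In the apparatus itself, the resulting pan/tilt/zoom state would be forwarded to the image conversion unit 232 as the parameters Δθ, Δφ and R of the perspective conversion.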
Thereafter, for example, the first perspective image display window 720 is selected from among the perspective image display windows 720, 730 and 740, and the image displayed in the window 720 is tilted to −90° by the direction keys 760, whereby a bird's-eye view image seen from the position directly above the rear portion of the vehicle 100A is obtained as described above. An example of such a bird's-eye view image is shown in FIG. 8. [155] FIG. 8 shows another example of the display screen of the display unit 240, a display screen 810. [156] As shown in FIG. 8, the display screen 810 may display only an enlarged first perspective image display window 830. A first explanation display window 820 indicates that the first perspective image display window 830 shows a bird's-eye view image of the rear portion of the vehicle 100A and its surroundings. Such a bird's-eye view display is useful when the driver needs to check the distance between his or her own vehicle and an adjacent vehicle or obstacle: for example, when the driver parks the vehicle in a garage or parking lot, drives the vehicle out of the garage or parking lot, or parks or stops the vehicle as close as possible to an adjacent vehicle or to the edge of an obstacle. [157] In the example shown in FIG. 8, the omnidirectional visual sensor 210B is located at a corner of the vehicle 100 (this configuration is described later in detail in connection with the second embodiment). In this case, about one quarter (about 90°) of the field of view of the omnidirectional visual sensor 210B (region 860 in FIG. 8) is blocked by the vehicle 100. That is, the field of view 850 of the omnidirectional visual sensor is about 270°, which is wider than the about 180° rear field of view of the omnidirectional visual sensor 210B (see FIGS. 1C and 1D) located at the center portion of the rear bumper 120. [158] In addition, each image displayed in the second perspective image display window 730 and the third perspective image display window 740 (FIG. 7) can be horizontally rotated by an arbitrary angle within a 360° rotatable range. For example, when the image displayed in the second perspective image display window 730 or the third perspective image display window 740 is horizontally rotated by 90°, a front or rear perspective image of the vehicle 100 can be obtained. Further, the display unit 240 can be configured such that the display can be switched between the display screen 710 (FIG. 7) and the display screen 810 (FIG. 8) by a single operation of an additional switch provided on the display screen. [159] (Embodiment 2) [160] FIG. 9A is a plan view showing a vehicle 900 including a surround surveillance apparatus 1000 for a mobile body according to Embodiment 2 of the present invention. FIG. 9B is a side view of the vehicle 900. [161] The difference between the vehicle 900 of Embodiment 2 and the vehicle 100A of Embodiment 1 is that the omnidirectional visual sensor 1010A is located at the front right corner of the vehicle 900, and the omnidirectional visual sensor 1010B is located at the rear left corner of the vehicle 900, diagonally opposite to the sensor 1010A. [162] Each of the omnidirectional visual sensors 1010A and 1010B generally has an omnidirectional field of view covering 360° around it in the horizontal direction. However, a quarter of the field of view of the omnidirectional visual sensor 1010A (the left half (90°) of the rear field of view) is blocked by the vehicle 900. That is, the effective field of view of the omnidirectional visual sensor 1010A is limited to a front field of view of about 270°. Similarly, a quarter of the field of view of the omnidirectional visual sensor 1010B (the right half (90°) of the front field of view) is blocked by the vehicle 900.
That is, the effective field of view of the omnidirectional visual sensor 1010B is limited to a rear field of view of about 270°. Therefore, the two omnidirectional visual sensors 1010A and 1010B can see the regions very close to the vehicle 900 which are blind spots for the vehicle 100 of Embodiment 1, and an about 360° field-of-view image around the entire vehicle 900 can be obtained. [163] For example, with reference to the example shown in FIG. 7, consider the case where the driver selects the first perspective image display window 720 and uses the direction keys 760 to display a bird's-eye view image of the vehicle 900. In this case, if the image data obtained through the omnidirectional visual sensor 1010A is used without special conversion, the vehicle 900 cannot be displayed at the center of the display screen. According to this embodiment, in such a case, the image data obtained through the omnidirectional visual sensor 1010A can be converted according to the conversion method described in Embodiment 1 so that the vehicle 900 is displayed at a predetermined position (e.g., the center) of the display screen shown in FIG. 7. [164] FIG. 10 shows an exemplary display screen 1110 according to Embodiment 2. In Embodiment 2, the image processing unit 230 or the display unit 240 can synthesize a bird's-eye view image obtained through the omnidirectional visual sensor 1010A located at the front right corner and a bird's-eye view image obtained through the omnidirectional visual sensor 1010B located at the rear left corner diagonally opposite to the sensor 1010A, so that the entire surroundings of the vehicle 900 can be displayed on the display screen 1110 at once, as shown in FIG. 10.
[165] However, when the omnidirectional visual sensor is positioned higher than the main body (or roof) of the vehicle, a bird's-eye view image of the vehicle and its surroundings, as viewed from a position directly above the vehicle, can be obtained. If the omnidirectional visual sensor is positioned lower than the main body (or roof) of the vehicle, the sensor can only obtain an image representing the side(s) of the vehicle. In this case, in order to display a bird's-eye view image representing the entire vehicle, a planar image of the vehicle and its surroundings is prepared in advance, either captured from a position directly above the vehicle or produced using computer graphics software. This planar image can be superimposed at a predetermined position in the image displayed on the display screen, i.e., combined with the image obtained through the omnidirectional visual sensor. With this display function, the driver of the vehicle can easily check the surroundings of the vehicle. For example, at a parking place, the driver can easily determine the distance between the vehicle and a white line or an obstacle. [166] Also, as shown in FIG. 11, when the images obtained through the omnidirectional visual sensors 1010A and 1010B located at diagonal corners of the vehicle 900 are synthesized, both sensors 1010A and 1010B can obtain images of region (1) and region (4). In this case, the image of region (1) (or region (4)) is displayed by selectively using either the image obtained from the sensor 1010A or the image obtained from the sensor 1010B. However, with such a display method, the difference in viewing angle between the sensors 1010A and 1010B causes a visual discontinuity at the interface between the image obtained from the selected omnidirectional visual sensor and an image obtained from the other omnidirectional visual sensor (e.g., at the interface between region (1) or (4) and region (2) or (3)). This visual discontinuity causes difficulty and discomfort for the driver of the vehicle 900 viewing the displayed image. [167] To avoid this problem, a switching key 790 connected to the display control unit 250 is provided on the display screen. In response to a switching operation by the driver using the switching key 790, a signal is transmitted to the display control unit 250. According to the signal from the switching key 790, one of the sensors 1010A and 1010B is selected to supply the image of region (1) or (4), so that the image of the region to which the driver pays closest attention and the image of the region adjacent to it are combined smoothly. This provides the driver with a smoothly synthesized display image. For example, if the driver of the vehicle 900 pays more attention to the interface between region (1) and region (3) than to the interface between region (1) and region (2), the driver can use the switching key 790 to select the image of region (1) obtained through the sensor 1010A, so that the displayed image is visually smooth at the interface between region (1) and region (3) while a visual discontinuity occurs at the interface between region (1) and region (2). Conversely, the driver can use the switching key 790 to select the image of region (1) obtained through the sensor 1010B, so that the displayed image is visually smooth at the interface between region (1) and region (2) while a visual discontinuity occurs at the interface between region (1) and region (3). [168] (Embodiment 3) [169] FIG. 12A is a plan view showing the configuration of a vehicle 1200 incorporating a surrounding monitoring device 1300 according to a third embodiment of the present invention. FIG. 12B is a side view of the vehicle of FIG. 12A.
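Before turning to Embodiment 3, the switching-key behavior of Embodiment 2 described above can be sketched as follows. The region names, the sensor labels, and the `select_region_sources` function are illustrative assumptions only; the patent specifies the behavior, not an implementation.

```python
def select_region_sources(smooth_with_region3: bool) -> dict:
    """Decide which sensor renders each display region.

    Regions (2) and (3) are each visible to only one sensor, so their
    sources are fixed: (2) from sensor A (front right), (3) from sensor B
    (rear left). Overlap regions (1) and (4) are visible to both sensors,
    and the driver's switching key chooses their source so that the seam
    the driver cares about is visually smooth.
    """
    # Choosing sensor A for the overlap makes the interface with region (3)
    # smooth; choosing sensor B makes the interface with region (2) smooth
    # (which interface is smooth for which sensor is assumed here).
    overlap_source = "A" if smooth_with_region3 else "B"
    return {
        "region1": overlap_source,
        "region2": "A",
        "region3": "B",
        "region4": overlap_source,
    }

# Driver presses the switching key 790: the seam moves to the other interface.
sources = select_region_sources(smooth_with_region3=True)
```

The point of the sketch is that the discontinuity cannot be removed, only relocated: the switching key lets the driver move it away from the interface currently being watched.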
[170] FIGS. 12A and 12B show that the surrounding monitoring device 1300 differs from the surrounding monitoring device 1000 in that a temperature measuring unit 270 is provided at a position on the vehicle 1200 (e.g., the front portion of the roof) that is optimal for measuring the ambient temperature. [171] FIG. 13 is a block diagram showing the configuration of the surrounding monitoring device 1300 according to the third embodiment. The surrounding monitoring device 1300 differs from the surrounding monitoring device 200 of FIG. 2 in that the arithmetic/control unit 1320 of the device 1300 includes the temperature measuring unit 270. [172] In Embodiment 3, as described above, the temperature measuring unit 270 is provided at a location on the exterior surface of the vehicle 1200 (e.g., the front portion of the roof) that is optimal for measuring the ambient temperature. The temperature measuring unit 270 is connected to the display control unit 250 of the surrounding monitoring device 1300 through a cable. When the engine of the vehicle 1200 is started, if the measurement result of the temperature measuring unit 270 is equal to or lower than a predetermined temperature, the display control unit 250, based on the output 256 from the measuring unit 270, automatically displays an image showing the entire periphery of the vehicle 1200 at once, such as the bird's-eye view image shown in FIG. 10, on a single display screen for a certain period. With such a configuration, even when the ambient temperature of the vehicle 1200 is at or below the predetermined temperature, so that the window glass of the vehicle 1200 is fogged and the driver's view through the windows is obstructed, the safety around the vehicle 1200 can be easily checked. [173] In Embodiments 1 to 3, the omnidirectional visual sensor is located on the roof or bumper of the vehicle, but it may also be located on the hood, the side mirrors, or elsewhere on the vehicle.
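The start-up rule of [172] can be sketched as follows. This is a minimal sketch: the threshold value, the display period, and the function and constant names are assumptions for illustration; the patent only specifies "a predetermined temperature" and "an arbitrary period".

```python
FOG_TEMP_THRESHOLD_C = 3.0   # assumed threshold; the patent leaves it unspecified
AUTO_DISPLAY_SECONDS = 30.0  # assumed display period; likewise unspecified

def on_engine_start(ambient_temp_c: float) -> str:
    """Decide the initial display mode from the measured ambient temperature.

    Models the Embodiment 3 behavior: when the engine is started and the
    temperature measuring unit reports a value at or below the threshold
    (so the windows are likely fogged), the one-screen surround bird's-eye
    view is shown automatically; otherwise the normal display is used.
    """
    if ambient_temp_c <= FOG_TEMP_THRESHOLD_C:
        # Windows may be misted: show the surround view for AUTO_DISPLAY_SECONDS.
        return "surround_birds_eye"
    return "normal"

mode_cold = on_engine_start(1.5)
mode_warm = on_engine_start(20.0)
```

In the device itself this decision would be made by the display control unit 250 from the output 256 of the temperature measuring unit 270; the sketch isolates only the threshold logic.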
In Embodiments 1 to 3, a passenger car was described as the vehicle. However, the present invention is not limited to this, and can be applied to large vehicles such as buses and cargo vehicles. In particular, the present invention is useful for freight vehicles, because in many freight vehicles the driver's field of view in the rearward direction is blocked by the cargo compartment. The present invention can also be applied to airplanes and general mobile robots. [174] As described above, according to the present invention, the driver's blind spots can be easily observed by locating the omnidirectional visual sensor(s), for example, on the bumper(s), corner(s), etc. of the vehicle. With such a system, the driver does not need to switch among a plurality of cameras or change the orientation of a camera in order to select an image to show on the display device, as in conventional vehicle monitoring apparatuses. Thus, when the driver starts driving, when the vehicle turns right or left, or when the driver parks the vehicle in, or drives it out of, a garage or parking lot, the driver can check the safety around the vehicle, and safe driving can be realized. [175] In addition, the driver can select a desired display image, and change the display direction or the size of the image. In particular, when the driver parks the vehicle in, or drives it out of, a garage or parking lot, or when the driver parks or stops the vehicle as close as possible to an adjacent vehicle or obstacle, by switching the display to the bird's-eye view display, the safety around the vehicle can be easily checked (e.g., the distance between one's own vehicle and an adjacent vehicle or obstacle can be easily checked), thereby preventing contact accidents and the like.
[176] Various other modifications may be readily made by those skilled in the art without departing from the scope and spirit of the invention. Accordingly, the scope of the appended claims should not be limited to the description herein, but should be construed broadly.
Claims:
Claims (19) [1" claim-type="Currently amended] A surround surveillance system mounted on a mobile body for monitoring the surroundings of the mobile body, the surround surveillance system comprising an omnidirectional vision system which comprises: at least one omnidirectional visual sensor including an optical system capable of obtaining an image of an entire omnidirectional field of view around it and of performing central projection transformation of the image into an optical image, and an imaging section including an imaging lens for converting the optical image obtained by the optical system into image data; image processing means for converting the image data into at least one of panoramic image data and perspective image data; a display unit for displaying one of a panoramic image corresponding to the panoramic image data and a perspective image corresponding to the perspective image data; and a display control unit for controlling the display unit, wherein the optical system includes a hyperboloidal mirror having the shape of one sheet of a two-sheeted hyperboloid, the optical axis of the hyperboloidal mirror coincides with the optical axis of the imaging lens, and the principal point of the imaging lens is located at one of the focal points of the hyperboloidal mirror, and wherein the display unit displays a perspective image converted from a bird's-eye view image of the mobile body and its surroundings. [2" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the at least one omnidirectional visual sensor is arranged such that a bird's-eye view image of the mobile body and its surroundings is converted into image data. [3" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the display unit displays the panoramic image and the perspective image simultaneously or selectively.
[4" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the display unit displays an image viewed in a direction opposite to the most probable moving direction of the mobile body. [5" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the image processing means converts image data corresponding to a first area within the omnidirectional field of view around the optical system into first perspective image data. [6" claim-type="Currently amended] The surround surveillance system according to claim 5, wherein, in response to control by the display control unit, the image processing means converts image data corresponding to a second area within the omnidirectional field of view around the optical system, which does not coincide with the first area, into second perspective image data. [7" claim-type="Currently amended] The surround surveillance system according to claim 6, wherein the second area is identical to an area obtained by performing at least one of a translational movement process and a zoom-in/zoom-out process on the first area. [8" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the optical system is arranged such that its optical axis is perpendicular to the most probable moving direction of the mobile body. [9" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein, in response to control by the display control unit, the display unit displays an image representing the mobile body on the display screen of the display unit such that the mobile body is shown at a predetermined position in the image displayed on the display screen.
[10" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the display unit simultaneously displays an image viewed in a direction opposite to the most probable moving direction of the mobile body and an image viewed in a direction that is neither identical nor opposite to the most probable moving direction of the mobile body. [11" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the mobile body is a vehicle. [12" claim-type="Currently amended] The surround surveillance system according to claim 11, wherein the vehicle includes a first bumper provided on the side of the vehicle facing the most probable moving direction and a second bumper provided on the side of the vehicle opposite to the most probable moving direction, and the at least one omnidirectional visual sensor includes a first omnidirectional visual sensor provided on the first bumper and a second omnidirectional visual sensor provided on the second bumper. [13" claim-type="Currently amended] The surround surveillance system according to claim 12, wherein the first omnidirectional visual sensor is disposed at one of the right and left ends of the first bumper with respect to the most probable moving direction of the vehicle, and the second omnidirectional visual sensor is disposed at an end of the second bumper that is at a position diagonal, with respect to the body of the vehicle, to the end of the first bumper at which the first omnidirectional visual sensor is disposed. [14" claim-type="Currently amended] The surround surveillance system according to claim 13, wherein the display unit displays an image obtained by combining a first perspective image derived from the first omnidirectional visual sensor and a second perspective image derived from the second omnidirectional visual sensor.
[15" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the image processing means includes a storage unit that stores image data of the mobile body, the image processing means combines the image data of the mobile body in the storage unit with the perspective image data derived from the optical system, and the display unit displays a perspective image including an image representing the mobile body according to the combined image data. [16" claim-type="Currently amended] The surround surveillance system according to claim 15, wherein the image data of the mobile body is image data produced using computer graphics software. [17" claim-type="Currently amended] The surround surveillance system according to claim 15, wherein the image data of the mobile body is image data obtained by capturing an image of the mobile body. [18" claim-type="Currently amended] The surround surveillance system according to claim 1, wherein the omnidirectional vision system further includes a temperature measuring unit for measuring the ambient temperature around the mobile body, and the display unit displays a perspective bird's-eye view image of the mobile body and its surroundings after the mobile body becomes movable when the ambient temperature measured by the temperature measuring unit is equal to or lower than a predetermined temperature. [19" claim-type="Currently amended] When displaying a perspective image of an overlapping region between the display region of the perspective bird's-eye view image of the mobile body and its surroundings obtained through the first omnidirectional visual sensor and the display region of the perspective bird's-eye view image of the mobile body and its surroundings obtained through the second omnidirectional visual sensor, the display unit displays a perspective image derived from one of the first omnidirectional visual sensor and the second omnidirectional visual sensor under control by the display control unit.
Similar technologies:
公开号 | 公开日 | 专利标题 US9646572B2|2017-05-09|Image processing apparatus US20160098815A1|2016-04-07|Imaging surface modeling for camera modeling and virtual view synthesis US20200055454A1|2020-02-20|Vehicular display system with multi-paned image display KR101741433B1|2017-05-30|Driver assistance apparatus and control method for the same JP5836490B2|2015-12-24|Driving assistance device EP2794353B1|2016-06-29|Vehicle rear monitoring system RU147024U1|2014-10-27|Rear view system for vehicle KR101295295B1|2013-08-12|Image processing method and image processing apparatus KR20150052148A|2015-05-13|Rearview imaging systems for vehicle EP3024700B1|2018-05-02|Method and device for reproducing a lateral and/or rear surrounding area of a vehicle KR101446897B1|2014-10-06|Vehicle periphery monitoring device US7212653B2|2007-05-01|Image processing system for vehicle US20130128049A1|2013-05-23|Driver assistance system for a vehicle JP5099451B2|2012-12-19|Vehicle periphery confirmation device US7538795B2|2009-05-26|Monitor device for moving body EP1170173B1|2006-11-08|Picture composing apparatus and method EP1270329B2|2016-11-23|Monitoring system US8836787B2|2014-09-16|Image generating apparatus and image display system DE60310799T2|2007-04-26|Image display device and method for a vehicle JP5251947B2|2013-07-31|Image display device for vehicle JP4752486B2|2011-08-17|Imaging device, video signal selection device, driving support device, automobile DE102013221027A1|2014-06-12|Display and procedures that are suitable for moving an image US7136091B2|2006-11-14|Vehicle imaging apparatus, vehicle monitoring apparatus, and rearview mirror US7969294B2|2011-06-28|Onboard display device, onboard display system and vehicle US8044781B2|2011-10-25|System and method for displaying a 3D vehicle surrounding with adjustable point of view including a distance sensor
Patent family:
公开号 | 公开日 JP2002218451A|2002-08-02| DE60101440T3|2008-04-17| DE60101440T2|2004-09-23| EP1197937B1|2003-12-10| DE60101440D1|2004-01-22| KR100650121B1|2006-11-24| EP1197937A1|2002-04-17| EP1197937B2|2007-08-01| JP3773433B2|2006-05-10| US7295229B2|2007-11-13| US20020080017A1|2002-06-27|
Cited references:
Publication No. | Filing date | Publication date | Applicant | Patent title
Legal status:
2000-10-11|Priority to JPJP-P-2000-00311206 2000-10-11|Priority to JP2000311206 2001-10-09|Priority to JP2001310887A 2001-10-09|Priority to JPJP-P-2001-00310887 2001-10-11|Application filed by 마찌다 가쯔히꼬, 샤프 가부시키가이샤 2002-04-17|Publication of KR20020028853A 2004-09-01|First worldwide family litigation filed 2006-11-24|Application granted 2006-11-24|Publication of KR100650121B1
Priority:
Application No. | Filing date | Patent title
JPJP-P-2000-00311206 | 2000-10-11 |
JP2000311206 | 2000-10-11 |
JP2001310887A | JP3773433B2 | 2000-10-11 | 2001-10-09 | Ambient monitoring device for moving objects
JPJP-P-2001-00310887 | 2001-10-09 |